Linear Algebra
Basis Space and Basis Vectors
Imagine these in 2D as 'tiling' a vector space. Imagine making a grid with those long pieces from Erector Sets. You can shear/smush them only at certain angles. Now imagine that the long pieces can only stretch or shrink lengthwise. That's kinda what these are.
Now "applying" a Matrix means that you change this Vector Space and all the vectors you've embedded in it in some way. You shrink it, stretch it, rotate it by some angle, flip it inside-out, or just leave it alone! In some cases, you can even change your mind and smash an undo button called "Commutativity".
Vector Norms
These functions give you an idea of the 'size' or 'length' of a vector. Three common kinds:
- Euclidean: $\|x\|_2 = \sqrt{\sum_i x_i^2}$
- Manhattan: $\|x\|_1 = \sum_i |x_i|$
- Infinity: $\|x\|_\infty = \max_i |x_i|$ (AKA the Fuck It I'm Tired Norm)
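All three are one `np.linalg.norm` call away:

```python
import numpy as np

x = np.array([3.0, -4.0])

print(np.linalg.norm(x, 2))       # 5.0 -- Euclidean: sqrt(3^2 + 4^2)
print(np.linalg.norm(x, 1))       # 7.0 -- Manhattan: |3| + |-4|
print(np.linalg.norm(x, np.inf))  # 4.0 -- Infinity: max(|3|, |-4|)
```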
Eigenvalues and Eigenvectors
TODO: Finish this.
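Until that's written, the short version: an eigenvector of $A$ is a non-zero vector $v$ that $A$ only stretches or shrinks (it stays on its own line), and the eigenvalue $\lambda$ is the stretch factor: $Av = \lambda v$. A minimal numpy sketch:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [2. 3.]

# Each column of `eigenvectors` is an eigenvector; check A @ v == lambda * v.
v, lam = eigenvectors[:, 0], eigenvalues[0]
print(np.allclose(A @ v, lam * v))  # True
```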
Dot and Cross Products
Note that Dot and Cross Products are only defined for Vectors. I mean there are things like the Kronecker Product but that's not what we're dealing with here.
Dot Products
These are easy-peasy and tell you about how well two vectors vibe with each other. The result is a number. Consider two vectors $a$ and $b$ with the same size. If $a = [a_1, \dots, a_n]$ and $b = [b_1, \dots, b_n]$, then

$$a \cdot b = \sum_{i=1}^{n} a_i b_i = \|a\| \, \|b\| \cos\theta$$

That's about it. If you get a zero, they're orthogonal (at $90°$ in 2D space). That Cosine is a good similarity measure that's used in all manner of Machine Learning algos like LLMs. E.g. recall that $\cos(90°) = 0$, which you can take to mean that they're not similar at all.
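Here's that similarity measure as a hand-rolled numpy function (libraries ship their own, but the math is just this):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Dot product of a and b, normalized by their lengths: cos(theta)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(np.array([1.0, 0.0]), np.array([2.0, 0.0])))  # 1.0 -- same direction
print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 5.0])))  # 0.0 -- orthogonal
```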
Cross Products
These work in 3D for the most part and will give you a new vector that is orthogonal/perpendicular to the plane of the two input vectors (which are 3D!). I've never used them for anything. Read this for more.
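A quick numpy check that the output really is perpendicular to both inputs:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

c = np.cross(a, b)
print(c)                           # [0. 0. 1.] -- the z-axis
print(np.dot(c, a), np.dot(c, b))  # 0.0 0.0    -- orthogonal to both inputs
```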
Matrix Rank
This is an easy concept but is pretty important downstream. It's the number of linearly independent rows or columns of a matrix.
When do you pick rows versus columns? The smallest of the two: if you have a 'rectangular' $m \times n$ matrix (dimensions are always rows $\times$ columns), then $\operatorname{rank}(A) \leq \min(m, n)$.
A "Full Rank" matrix is one where there are no linearly dependent (not independent!) rows or columns (whichever is smallest). So if you have a matrix that's 4 rows and 3 columns, the maximum rank possible is 3. Now look at the columns and see if you can figure out if one column depends on the other. Didn't find any? Awesome, you have a Full Rank matrix.
Found one that depends on the other? Your rank is 2. Found two? Rank 1. See this Wikipedia article on Row Echelon Forms for more.
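numpy will do the squinting-at-columns for you:

```python
import numpy as np

# 4 rows x 3 columns, so the max possible rank is min(4, 3) = 3...
A = np.array([[1.0, 2.0,  3.0],
              [4.0, 5.0,  9.0],
              [7.0, 8.0, 15.0],
              [1.0, 0.0,  1.0]])

# ...but the third column is the sum of the first two: one dependency.
print(np.linalg.matrix_rank(A))  # 2
```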
Identity Matrix
A nice simple square matrix with ones on the diagonal and zeros everywhere else, so it looks like this:

$$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Applying it leaves everything alone: $AI = IA = A$.
Determinants
This gives you a scalar (boring-ass number) from your matrix. This number tells you how much applying the matrix will scale an area (2D) or volume (3D+) of a space. If the number's negative the orientation flips and you get a mirror image.
TODO: More here...
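A quick numpy sanity check with a 2D stretch and a 2D flip:

```python
import numpy as np

double_x = np.array([[2.0, 0.0],
                     [0.0, 1.0]])
flip_x = np.array([[-1.0, 0.0],
                   [ 0.0, 1.0]])

print(np.linalg.det(double_x))  # 2.0  -- areas double
print(np.linalg.det(flip_x))    # -1.0 -- same area, but mirror image
```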
Commutativity
In general, $AB \neq BA$. You can verify this yourself with two matrices (sketch below). But there are cases where equality holds:
- If $B = cI$ for some scalar $c$, then $AB = BA$ (i.e. you can scale the Identity Matrix all you want)
- Diagonal matrices: any two diagonal matrices of the same size commute with each other
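Verify it yourself, as promised:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.array_equal(A @ B, B @ A))  # False -- order matters in general

C = 5.0 * np.eye(2)                  # a scaled identity matrix
print(np.array_equal(A @ C, C @ A))  # True  -- this one commutes with anything
```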
Invertible Matrix
This is a square matrix $A$ that has some other square matrix $B$ such that

$$AB = BA = I$$

This other square matrix is the Inverse of $A$ and is denoted $A^{-1}$. It has some properties:
- Its Determinant is not zero.
- It has a Full Rank.
- If $x$ is some vector, $Ax = 0$ has only one solution: $x$ is full of zeroes!
- If $b$ is some vector, $Ax = b$ has just one solution: $x = A^{-1}b$
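In numpy (`np.linalg.solve` is the numerically saner way to get that one solution, but `inv` shows the idea):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True -- A times its inverse is I

b = np.array([3.0, 2.0])
print(A_inv @ b)              # [1. 1.] -- the one solution to Ax = b
print(np.linalg.solve(A, b))  # same answer, more stable numerically
```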
Singular Matrix
This is a square matrix $A$ where
- The Determinant is Zero
- It's not Full Rank
- There's some non-zero vector $x$ such that $Ax = 0$
- It is not invertible!
These things smush a vector space into lower dimensions. Well really they create a mapping to a lower-dimensional space (the original is preserved) but yeah.
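The whole checklist on one singular matrix:

```python
import numpy as np

# Second row is 2x the first, so this matrix is singular.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))          # 0.0 -- Determinant is zero
print(np.linalg.matrix_rank(A))  # 1   -- not Full Rank

x = np.array([2.0, -1.0])        # a non-zero vector with Ax = 0
print(A @ x)                     # [0. 0.]

# np.linalg.inv(A) here would raise LinAlgError: it's not invertible.
```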
Transposes
Transposes are when you turn a matrix $A$'s rows into columns and vice-versa and denote the monstrosity $A^T$. They're just a different kind of transformation and are useful depending on the problem you're trying to solve. They have some properties:
- $(A^T)^T = A$
- $(A + B)^T = A^T + B^T$
- $(AB)^T = B^T A^T$ (note the order flip!)
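A quick numpy check of that last (and most surprising) one:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(np.array_equal((A @ B).T, B.T @ A.T))  # True -- order reverses under transpose
```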
Miscellaneous
Other Types of Matrices
- An Orthogonal matrix is one where $Q^T Q = Q Q^T = I$ (equivalently, $Q^{-1} = Q^T$)
- A Symmetric matrix is one where $A = A^T$
- A Conjugate matrix just flips the sign of the imaginary part of any complex numbers in a matrix.
- A Hermitian matrix is when a matrix equals its Conjugate Transpose: $A = A^H$
Pretty important in ML and Quantum Mechanics. TODO: Conjugate and Adjoint matrices...
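Quick numpy membership tests for three of these (the helper names are mine):

```python
import numpy as np

def is_orthogonal(Q: np.ndarray) -> bool:
    return np.allclose(Q.T @ Q, np.eye(Q.shape[0]))

def is_symmetric(A: np.ndarray) -> bool:
    return np.allclose(A, A.T)

def is_hermitian(A: np.ndarray) -> bool:
    return np.allclose(A, A.conj().T)  # equal to its Conjugate Transpose

rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])
print(is_orthogonal(rotation))                           # True
print(is_symmetric(np.array([[1.0, 2.0], [2.0, 3.0]])))  # True
print(is_hermitian(np.array([[1.0, 1j], [-1j, 2.0]])))   # True
```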
Cramer's Rule
Easier shown with an example. Heaven forbid you compute things by hand these days...
Say you're solving $Ax = b$. Replace the $i$-th column of $A$ by $b$ to get a new matrix $A_i$. By Cramer's Rule,

$$x_i = \frac{\det(A_i)}{\det(A)}$$
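And since the whole point was an example, here's one in numpy (hand-rolled purely to show the mechanics; use `np.linalg.solve` for real work):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.empty_like(b)
for i in range(len(b)):
    A_i = A.copy()
    A_i[:, i] = b  # replace the i-th column of A with b
    x[i] = np.linalg.det(A_i) / np.linalg.det(A)

print(x)                      # [0.8 1.4]
print(np.linalg.solve(A, b))  # same answer
```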